Not So Fast: Use PCI Express For Lower-Speed Interconnects, Too

Feb. 15, 2007

As PCI Express (PCIe) continues its rapid worldwide deployment, economies of scale are quickly driving down the cost of implementing a basic, single-lane connection. PCIe has been accepted as the next generation of high-speed I/O interconnect, with scalable performance well into the multiple-gigabyte/second range. Yet many applications still need a low-cost connection across a backplane, from card to card, or even from box to box.

A single-lane PCIe link will fill this requirement nicely, with bandwidth to spare. Applications such as controller area networks (CANs), simple storage and retrieval networks, industrial controllers, data-acquisition systems, and multiprocessor environments can take advantage of PCIe’s software compatibility with conventional PCI and implement robust interconnections with cost efficiency, scalable bandwidth, and simplicity of design.

Today’s PCIe switches and reverse-mode bridges are already being used to create low-cost, easy-to-design interconnects for chip-to-chip, card-to-card, and box-to-box data communications in a wide variety of applications, including those that don’t require leading-edge performance.

PCIe was originally envisioned as a chip-to-chip interconnect, extending the performance of PCI by providing a serial interconnect that eliminated the clock skew associated with the routing of high-speed parallel bus signals while maintaining software compatibility with PCI. Figure 1 shows how PCIe is now used in both chip-to-chip and box-to-box interconnection schemes. On system boards or add-in cards, PCIe connects processors and I/O endpoints using switches (for I/O fan-out expansion) and PCIe-to-PCI bridges (for protocol translation between PCI and/or PCI-X).
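
Because the configuration model carries over unchanged, PCI-era enumeration and driver code works as-is on PCIe devices. As a rough illustration only (not from the original article), the minimal C sketch below reads a device’s vendor and device IDs through Linux’s standard PCI sysfs interface, which looks identical whether the device sits on conventional PCI or behind a PCIe link; the device address 0000:01:00.0 is a placeholder.

```c
/* Illustrative sketch: the PCI configuration-space layout is unchanged under
 * PCIe, so the same read works for either bus type. The device address
 * 0000:01:00.0 is a placeholder. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/config", "rb");
    if (!f) { perror("open config space"); return 1; }

    uint8_t cfg[4];                       /* bytes 0-3: vendor ID, device ID */
    if (fread(cfg, 1, sizeof cfg, f) != sizeof cfg) { fclose(f); return 1; }
    fclose(f);

    uint16_t vendor = (uint16_t)(cfg[0] | (cfg[1] << 8));
    uint16_t device = (uint16_t)(cfg[2] | (cfg[3] << 8));
    printf("vendor %04x, device %04x\n", vendor, device);
    return 0;
}
```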

Now, systems are using various cable standards, including a PCIe-specific standard cable, to allow box-to-box communication for both high-performance and low-cost applications. These systems take advantage of the scalability of PCIe links to hit the appropriate performance/cost tradeoff. Cable schemes are now deployed with link widths ranging from a single PCIe lane (x1, allowing transfer rates up to 250 Mbytes/s) to 16 lanes (x16, up to 4 Gbytes/s), using standard PCIe switches and bridges as cable drivers.
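
Those throughput figures follow from the first-generation signaling rate: 2.5 Gtransfers/s per lane with 8b/10b encoding leaves 250 Mbytes/s of payload bandwidth per lane, per direction. The short C sketch below (an illustrative calculation, not from the article) makes the arithmetic explicit for common link widths.

```c
#include <stdio.h>

int main(void)
{
    /* PCIe 1.x signaling: 2.5 Gb/s raw per lane; 8b/10b encoding carries
     * 8 data bits in every 10 bits on the wire. */
    const double raw_gbps   = 2.5;
    const double efficiency = 8.0 / 10.0;
    const int widths[] = { 1, 4, 8, 16 };

    for (unsigned i = 0; i < sizeof widths / sizeof widths[0]; i++) {
        /* 2.5 Gb/s * 0.8 = 2.0 Gb/s = 250 Mbytes/s per lane, per direction */
        double mbytes_per_s = raw_gbps * efficiency * 1000.0 / 8.0 * widths[i];
        printf("x%-2d link: %4.0f Mbytes/s per direction\n",
               widths[i], mbytes_per_s);
    }
    return 0;
}
```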

These PCIe-based cabling systems can be created with relatively low-cost standard cables for communications, albeit over a limited distance, along with off-the-shelf PCIe switches and bridges. If longer distances are needed, optical transceivers are available for use in conjunction with PCIe switches or bridges.

Additionally, PCIe offers advanced features such as quality of service (QoS) via isochronous channels for guaranteed bandwidth delivery when required, advanced power management, native hot-plug/hot-swap support, spread-spectrum clocking for electromagnetic-interference (EMI) reduction, and end-to-end cyclic redundancy code (CRC) for enhanced data integrity. Imagine a native PCIe microphone sending digital audio down a x1 link into an all-digital mixing console: no more bandwidth loss, hum, noise, or distortion associated with the cable!

Figure 2 shows a low-cost box-to-box connection, which PLX has demonstrated with its PEX 8111 bridge operating over a few meters of low-cost CAT7 cable. It lets legacy PCI systems communicate cost-efficiently over a x1 PCIe link. In this topology, the host system uses a PCIe-to-PCI bridge in “reverse” mode, while the slave system uses a bridge in “forward” mode. Both bridges are on PCI add-in cards that sit on each system’s 32-bit PCI bus.

In the host system, the PCI side of the bridge faces the host, so the bridge must be configured in “reverse mode”: the host’s 32-bit PCI bus is the upstream port, and the PCIe x1 link is the downstream port. This link drives the cable directly for a few meters without the need for a repeater.

The slave system’s forward-mode bridge is configured with the x1 PCIe link as its upstream port and the slave’s 32-bit PCI bus as its downstream port. This topology is useful for applications such as remote data storage, remote data acquisition, and systems that require physical or sound isolation between the user interface, the CPU, and/or the hard drives. Also, PCIe’s hot-plug feature enables the design of interchangeable I/O and storage modules or boxes with on-the-fly replacement.
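
To host software, the devices in the remote box simply enumerate behind what appears to be an ordinary PCI-to-PCI bridge. As an illustrative sketch only (assuming a Linux host with the pciutils libpci library; not part of the original article), the following walks the bus and flags bridge devices, which is where the reverse- and forward-mode bridges of this topology would show up:

```c
#include <pci/pci.h>
#include <stdio.h>

int main(void)
{
    struct pci_access *pacc = pci_alloc();   /* allocate library state        */
    pci_init(pacc);                          /* initialize access methods     */
    pci_scan_bus(pacc);                      /* enumerate all visible devices */

    for (struct pci_dev *dev = pacc->devices; dev; dev = dev->next) {
        pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_CLASS);
        /* Header type 1 marks a PCI-to-PCI bridge; devices in the remote
         * box appear on buses subordinate to such a bridge. */
        u8 hdr = pci_read_byte(dev, PCI_HEADER_TYPE) & 0x7f;
        printf("%02x:%02x.%d  %04x:%04x  class %04x%s\n",
               dev->bus, dev->dev, dev->func,
               dev->vendor_id, dev->device_id, dev->device_class,
               hdr == 1 ? "  (bridge)" : "");
    }

    pci_cleanup(pacc);
    return 0;
}
```

Build with something like `gcc enumerate.c -lpci` (the filename is arbitrary); the listing looks the same whether a device sits on local PCI or across the cabled PCIe link.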

One topology for connecting PCIe-native systems is to use eight-lane PCIe switches. This approach avoids protocol translation (as encountered in Figure 2, where bridges translate PCI to PCIe and back to PCI again), which can lower the cost and latency of box-to-box communications.

The host system uses a low-cost, five-port, eight-lane PCIe switch on an add-in card that plugs into the system board via a x4 PCIe slot. This lets several x1 PCIe cable connections aggregate into a high-bandwidth upstream port (up to 1 Gbyte/s).
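
The aggregation arithmetic works out neatly: four x1 cable ports at 250 Mbytes/s each exactly match the x4 upstream port’s 1 Gbyte/s, so the upstream link is not oversubscribed. A quick sketch (illustrative assumptions, not from the article) checks the ratio for a given mix of ports:

```c
#include <stdio.h>

int main(void)
{
    const double per_lane_mbs   = 250.0;  /* Mbytes/s per lane, PCIe 1.x    */
    const int    upstream_lanes = 4;      /* x4 slot on the host board      */
    const int    downstream_x1  = 4;      /* x1 cable ports on the switch   */

    double upstream_mbs   = upstream_lanes * per_lane_mbs;
    double downstream_mbs = downstream_x1 * per_lane_mbs;

    printf("Upstream capacity:    %.0f Mbytes/s\n", upstream_mbs);
    printf("Aggregate downstream: %.0f Mbytes/s\n", downstream_mbs);
    printf("Oversubscription:     %.2fx\n", downstream_mbs / upstream_mbs);
    return 0;
}
```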

With these low-cost solutions, along with the use of CAT7 cable and RJ45 connectors, designers can get into the PCI Express lane without the high cost.
